Fix threading and closing of servers #495
Conversation
Nice! Would be nice to have a test for a clean shutdown to avoid regressions in the future.
@ordian good point, let me try to come up with something.
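The requested clean-shutdown regression test could be sketched roughly as below, using only std primitives; the `Server` type, its `close()` method, and the thread name are illustrative stand-ins, not the actual jsonrpc-http-server API:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

// Hypothetical stand-in for a server that owns an event-loop thread.
struct Server {
    close_tx: mpsc::Sender<()>,
    handle: thread::JoinHandle<()>,
}

impl Server {
    fn start() -> Server {
        let (close_tx, close_rx) = mpsc::channel();
        let handle = thread::Builder::new()
            .name("jsonrpc-eventloop".into())
            .spawn(move || {
                // The event loop blocks here until a close signal arrives.
                let _ = close_rx.recv();
            })
            .unwrap();
        Server { close_tx, handle }
    }

    fn close(self) {
        let _ = self.close_tx.send(());
        // Joining proves the server thread actually terminated.
        self.handle.join().unwrap();
    }
}

fn main() {
    let started = Instant::now();
    let server = Server::start();
    server.close();
    // A clean shutdown should return promptly instead of hitting a timeout.
    assert!(started.elapsed() < Duration::from_secs(1));
    println!("closed cleanly");
}
```

The point of the test is the `join`: if any server thread or lingering connection keeps the runtime alive after `close()`, the test hangs or times out instead of passing.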
I think this is not OK. I'm not sure exactly why, but it seems to handle a single request at a time (which makes for a weird experience when using e.g. MyCrypto, which holds on to a connection for the whole session: all other requests block).
Not sure exactly what change causes this, but when
Not a big deal, but maybe we can dial it down from
LGTM.
I think it would be useful to have some module-level docs that clearly state how threads are configured. And a test. ;)
http/examples/http_async.rs (Outdated)

```diff
@@ -9,6 +9,7 @@ fn main() {
 	let server = ServerBuilder::new(io)
 		.cors(DomainsValidation::AllowOnly(vec![AccessControlAllowOrigin::Null]))
+		.threads(8)
```
```suggestion
		.threads(4)
```

Maybe a saner default?
Removed this altogether, pushed by mistake :)
http/src/tests.rs (Outdated)

```rust
    }
    std::thread::sleep(std::time::Duration::from_millis(10));
}
assert!(false);
```
```suggestion
assert!(false, "expected server to be closed");
```
Why not panic directly?
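Both forms fail the test the same way, since `assert!(false, msg)` boils down to a `panic!(msg)` behind a condition check, so the choice is mostly stylistic. A minimal sketch:

```rust
use std::panic;

fn main() {
    // Silence the default panic hook so the caught panics don't spam stderr.
    panic::set_hook(Box::new(|_| {}));

    // assert!(false, "msg") expands to (roughly) panic!("msg"): both unwind.
    let a = panic::catch_unwind(|| assert!(false, "expected server to be closed"));
    let b = panic::catch_unwind(|| panic!("expected server to be closed"));
    assert!(a.is_err());
    assert!(b.is_err());
    println!("both forms panic");
}
```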
Travis failure seems spurious (restarted) but appveyor is perhaps legit?
Let's merge this?
I think we should sort out the Travis failures first.
@dvdplm I thought it was spurious, can you confirm the test fails on macOS?
I have one failing test in
All good here as well. I just restarted the failing
Oh, actually it depends on the rust compiler version. Just bumped my compiler to 1.38 and seeing issues with
* Upgrade to jsonrpc v14. Contains paritytech/jsonrpc#495 with good bugfixes to resource usage.
* Bump tokio & futures.
* Bump even further.
* Upgrade tokio to 0.1.22
* Partially revert "Bump tokio & futures." This reverts commit 100907eb91907aa124d856d52374637256118e86.
* Replace `tokio_core` with `tokio` (`ring` -> 0.13) (#9657)
  - Replace `tokio_core` with `tokio`.
  - Remove `tokio-core` and replace with `tokio` in: `ethcore/stratum`, `secret_store`, `util/fetch`, `util/reactor`
  - Bump hyper to 0.12 in: `miner`, `util/fake-fetch`, `util/fetch`, `secret_store`
  - Bump `jsonrpc-***` to 0.9 in: `parity`, `ethcore/stratum`, `ipfs`, `rpc`, `rpc_client`, `whisper`
  - Bump `ring` to 0.13
  - Use a more graceful shutdown process in `secret_store` tests.
  - Convert some mutexes to rwlocks in `secret_store`.
* Consolidate Tokio Runtime use, remove `CpuPool`.
  - Rename and move the `tokio_reactor` crate (`util/reactor`) to `tokio_runtime` (`util/runtime`).
  - Rename `EventLoop` to `Runtime`.
    - Rename `EventLoop::spawn` to `Runtime::with_default_thread_count`.
    - Add the `Runtime::with_thread_count` method.
    - Rename `Remote` to `Executor`.
  - Remove uses of `CpuPool` and spawn all tasks via the `Runtime` executor instead.
  - Other changes related to `CpuPool` removal:
    - Remove `Reservations::with_pool`. `::new` now takes an `Executor` as an argument.
    - Remove `SenderReservations::with_pool`. `::new` now takes an `Executor` as an argument.
* Remove secret_store runtimes. (#9888)
  - Remove the independent runtimes from `KeyServerHttpListener` and `KeyServerCore` and instead require a `parity_runtime::Executor` to be passed upon creation of each.
  - Remove the `threads` parameter from both `ClusterConfiguration` structs.
  - Implement the `future::Executor` trait for `parity_runtime::Executor`.
  - Update tests.
    - Update the `loop_until` function to instead use a oneshot to signal completion.
    - Modify the `make_key_servers` function to create and return a runtime.
* misc: bump license header to 2019 (#10135)
  - misc: bump license header to 2019
  - misc: remove_duplicate_empty_lines.sh
  - misc: run license header script
  - commit cargo lock
* Upgrade to jsonrpc v14 (#11151)
  - Upgrade to jsonrpc v14. Contains paritytech/jsonrpc#495 with good bugfixes to resource usage.
  - Bump tokio & futures.
  - Bump even further.
  - Upgrade tokio to 0.1.22
  - Partially revert "Bump tokio & futures." This reverts commit 100907eb91907aa124d856d52374637256118e86.
* Added README, CHANGELOG and several meta tags in Cargo.toml
* Proper pr in changelog
* Remove categories tag
* Comments and usage fixed
* Declare test usage for methods explicitly
* Crate name in readme modified, complete removed
* Test helpers feature added, functions marked as test only
* Processed by fmt tool
* Illustrative example added
* Sample moved into the separate directory
* use examples directory instead of custom crate
* Wait till scanning completed
* Timeout decreased
* Unused methods removed

Co-authored-by: Nick Sanders <[email protected]>
Co-authored-by: 5chdn <[email protected]>
Co-authored-by: David <[email protected]>
Co-authored-by: Nikolay Volf <[email protected]>
This PR attempts to clean up the spawned threads (Closes #425).

Currently every `RpcEventLoop` spawns a bunch of threads that depend on the number of cores (tokio runtime), plus one thread that waits for the runtime to complete. Instead, we just spawn a `tokio::runtime` with one thread and a proper name.

We also avoid calling `tokio::spawn` when handling incoming connections in the various server implementations, to ensure that when the server is closed all connections are dropped as well (currently they are still being polled by the runtime, so you have to wait for them to close or time out).
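The "one thread and a proper name" part of the description can be illustrated with plain std threads (the actual code configures a tokio runtime; the thread name here is illustrative):

```rust
use std::thread;

fn main() {
    // Sketch: instead of one worker per core plus a waiter thread,
    // spawn a single, explicitly named event-loop thread.
    let handle = thread::Builder::new()
        .name("jsonrpc-eventloop".into())
        .spawn(|| {
            // The name shows up in panic messages, debuggers and profilers.
            assert_eq!(thread::current().name(), Some("jsonrpc-eventloop"));
        })
        .unwrap();
    handle.join().unwrap();
    println!("event loop thread finished");
}
```

Naming the thread lets stack traces and per-thread CPU listings attribute time to the RPC event loop rather than an anonymous worker.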